Self-Calibration and Biconvex Compressive Sensing
The design of high-precision sensing devices becomes ever more difficult and
expensive. At the same time, the need for precise calibration of these devices
(ranging from tiny sensors to space telescopes) manifests itself as a major
roadblock in many scientific and technological endeavors. To achieve optimal
performance of advanced high-performance sensors one must carefully calibrate
them, which is often difficult or even impossible to do in practice. In this
work we bring together three seemingly unrelated concepts, namely
Self-Calibration, Compressive Sensing, and Biconvex Optimization. The idea
behind self-calibration is to equip a hardware device with a smart algorithm
that can compensate automatically for the lack of calibration. We show how
several self-calibration problems can be treated efficiently within the
framework of biconvex compressive sensing via a new method called SparseLift.
More specifically, we consider a linear system of equations y = DAx, where both
x and the diagonal matrix D (which models the calibration error) are unknown.
By "lifting" this biconvex inverse problem we arrive at a convex optimization
problem. By exploiting sparsity in the signal model, we derive explicit
theoretical guarantees under which both x and D can be recovered exactly,
robustly, and numerically efficiently via linear programming. Applications in
array calibration and wireless communications are discussed, and numerical
simulations are presented, confirming and complementing our theoretical
analysis.
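The lifting step at the heart of SparseLift can be checked numerically. In the sketch below, all dimensions, the subspace model D = diag(Bh) for the calibration error, and the variable names are illustrative assumptions rather than values from the paper; the point is only that each bilinear measurement y_i = (Bh)_i (Ax)_i is a linear function of the lifted rank-one matrix X = h x^T:

```python
import numpy as np

rng = np.random.default_rng(0)
L, k, n = 64, 4, 32               # measurements, calibration-subspace dim, signal dim

B = rng.standard_normal((L, k))   # known subspace modeling the calibration error
A = rng.standard_normal((L, n))   # known sensing matrix
h = rng.standard_normal(k)        # unknown calibration parameters
x = np.zeros(n)                   # unknown sparse signal
x[rng.choice(n, 3, replace=False)] = rng.standard_normal(3)

D = np.diag(B @ h)                # diagonal calibration error D = diag(Bh)
y = D @ A @ x                     # bilinear (biconvex) measurements y = DAx

# Lifting: y_i = (b_i^T h)(a_i^T x) = b_i^T (h x^T) a_i is LINEAR in X = h x^T,
# so sparse convex recovery (e.g. l1-minimization) can target X directly.
X = np.outer(h, x)
y_lifted = np.array([B[i] @ X @ A[i] for i in range(L)])

print(np.allclose(y, y_lifted))   # lifted linear system matches the bilinear one
```

Once the problem is linear in X, sparsity of x (and hence of X) makes recovery by l1-minimization plausible, which is the regime the theoretical guarantees address.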
Regularized Gradient Descent: A Nonconvex Recipe for Fast Joint Blind Deconvolution and Demixing
We study the question of extracting a sequence of function pairs {(f_i, g_i)},
i = 1, ..., s, from observing only the sum of their convolutions, i.e., from
y = f_1 * g_1 + ... + f_s * g_s. While convex optimization techniques
are able to solve this joint blind deconvolution-demixing problem provably and
robustly under certain conditions, for medium-size or large-size problems we
need computationally faster methods without sacrificing the benefits of
mathematical rigor that come with convex methods. In this paper, we present a
non-convex algorithm which guarantees exact recovery under conditions that are
competitive with convex optimization methods, with the additional advantage of
being computationally much more efficient. Our two-step algorithm converges to
the global minimum linearly and is also robust in the presence of additive
noise. While the derived performance bounds are suboptimal in terms of the
information-theoretic limit, numerical simulations show remarkable performance
even if the number of measurements is close to the number of degrees of
freedom. We discuss an application of the proposed framework in wireless
communications in connection with the Internet-of-Things.
Comment: Accepted to Information and Inference: a Journal of the IMA
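As a minimal sketch of the forward model (not of the recovery algorithm), the snippet below assumes circular convolutions and illustrative dimensions, and verifies that the observed mixture y = f_1 * g_1 + ... + f_s * g_s becomes a sum of pointwise products after an FFT, which is the form a joint deconvolution-demixing method works with:

```python
import numpy as np

rng = np.random.default_rng(1)
L, s = 128, 3                      # signal length, number of sources (e.g. users)

# Unknown pairs (f_i, g_i); the receiver observes only the sum of convolutions.
fs = [rng.standard_normal(L) for _ in range(s)]
gs = [rng.standard_normal(L) for _ in range(s)]

def cconv(f, g):
    """Circular convolution via the convolution theorem: FFT turns * into a
    pointwise product."""
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

y = sum(cconv(f, g) for f, g in zip(fs, gs))   # observed mixture

# In the Fourier domain the model is y_hat = sum_i f_hat_i ⊙ g_hat_i,
# a sum of bilinear (pointwise) terms in the unknowns.
y_hat = sum(np.fft.fft(f) * np.fft.fft(g) for f, g in zip(fs, gs))
print(np.allclose(np.fft.fft(y), y_hat))
```

Without extra structure (e.g. the subspace assumptions in the paper), this system is badly underdetermined: 2sL unknowns against L measurements, which is why the recovery guarantees require such conditions.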
Rapid, Robust, and Reliable Blind Deconvolution via Nonconvex Optimization
We study the question of reconstructing two signals f and g from their
convolution y = f * g. This problem, known as blind deconvolution,
pervades many areas of science and technology, including astronomy, medical
imaging, optics, and wireless communications. A key challenge of this intricate
non-convex optimization problem is that it might exhibit many local minima. We
present an efficient numerical algorithm that is guaranteed to recover the
exact solution, when the number of measurements is (up to log-factors) slightly
larger than the information-theoretic minimum, and under reasonable
conditions on f and g. The proposed regularized gradient descent algorithm
converges at a geometric rate and is provably robust in the presence of noise.
To the best of our knowledge, our algorithm is the first blind deconvolution
algorithm that is numerically efficient, robust against noise, and comes with
rigorous recovery guarantees under certain subspace conditions. Moreover,
numerical experiments not only provide empirical verification of our theory,
but also demonstrate that our method yields excellent performance even in
situations beyond our theoretical framework.
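The local behavior of gradient descent on such a bilinear problem can be sketched in a toy setting. This is not the paper's algorithm (which uses a spectral initialization and an explicit regularizer under subspace conditions); here we simply initialize near the ground truth and run Wirtinger-style gradient steps on the pointwise bilinear system z_i = u_i v_i that blind deconvolution becomes after an FFT, to illustrate the steady decrease of the objective:

```python
import numpy as np

rng = np.random.default_rng(2)
L = 32

# Ground truth in the Fourier domain: y = f * g becomes z_i = u_i v_i pointwise.
u_true = rng.standard_normal(L) + 1j * rng.standard_normal(L)
v_true = rng.standard_normal(L) + 1j * rng.standard_normal(L)
z = u_true * v_true

# Initialize near the truth (a stand-in for the spectral initializer).
u = u_true + 0.1 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
v = v_true + 0.1 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))

def loss(u, v):
    return float(np.sum(np.abs(u * v - z) ** 2))

lr, losses = 0.01, [loss(u, v)]
for _ in range(300):
    r = u * v - z                                   # residual
    # Wirtinger gradients of ||u ⊙ v - z||^2 w.r.t. u and v
    u, v = u - lr * r * np.conj(v), v - lr * r * np.conj(u)
    losses.append(loss(u, v))

print(losses[0], losses[-1])   # the objective decreases from the initial value
```

Note the inherent scaling ambiguity: (u, v) and (a·u, v/a) give the same measurements, so only the pair up to a scalar can ever be recovered, which is also the sense of "exact solution" in the guarantees.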
Neural Collapse for Unconstrained Feature Model under Cross-entropy Loss with Imbalanced Data
Recent years have witnessed the huge success of deep neural networks (DNNs)
in various tasks of computer vision and text processing. Interestingly, these
DNNs with a massive number of parameters share similar structural properties in
their feature representations and last-layer classifiers at the terminal phase of
training (TPT). Specifically, if the training data are balanced (each class
shares the same number of samples), it is observed that the feature vectors of
samples from the same class converge to their corresponding in-class mean
features and their pairwise angles are the same. This fascinating phenomenon is
known as Neural Collapse (NC), first termed by Papyan, Han, and Donoho in
2019. Many recent works manage to explain this phenomenon theoretically by
adopting the so-called unconstrained feature model (UFM). In this paper, we study
the extension of the NC phenomenon to imbalanced data under the cross-entropy
loss function in the context of the unconstrained feature model. Our contribution is
multi-fold compared with the state-of-the-art results: (a) we show that the
feature vectors exhibit collapse phenomenon, i.e., the features within the same
class collapse to the same mean vector; (b) the mean feature vectors no longer
form an equiangular tight frame. Instead, their pairwise angles depend on the
sample size; (c) we also precisely characterize the sharp threshold at which
minority collapse (the feature vectors of the minority groups collapse to
a single vector) takes place; (d) finally, we argue that the effect of
the imbalance in data size diminishes as the sample size grows. Our results
provide a complete picture of the NC phenomenon under the cross-entropy loss for
imbalanced data. Numerical experiments confirm our theoretical analysis.
Comment: 38 pages, 10 figures
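A minimal numerical sketch of the unconstrained feature model makes point (a) concrete. Here the last-layer features are free optimization variables trained jointly with the linear classifier by full-batch gradient descent on cross-entropy plus weight decay; all dimensions, the learning rate, and the regularization strength are illustrative choices, not values from the paper. In this balanced toy run, the within-class feature variance shrinks sharply, i.e., the features collapse toward their class means:

```python
import numpy as np

rng = np.random.default_rng(3)
K, n, d = 3, 5, 4            # classes, samples per class, feature dimension
lam = 5e-3                   # weight decay (the UFM needs it to stay bounded)

H = rng.standard_normal((d, K * n))   # free "last-layer" features, one column per sample
W = rng.standard_normal((K, d))       # linear classifier
labels = np.repeat(np.arange(K), n)
Y = np.eye(K)[labels].T               # one-hot targets, K x (K*n)

def softmax(Z):
    Z = Z - Z.max(axis=0, keepdims=True)   # shift for numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=0, keepdims=True)

def within_class_var(H):
    means = np.stack([H[:, labels == k].mean(axis=1) for k in range(K)], axis=1)
    return float(np.mean((H - means[:, labels]) ** 2))

v0 = within_class_var(H)
lr = 0.2
for _ in range(4000):                 # full-batch GD on regularized cross-entropy
    P = softmax(W @ H)
    G = (P - Y) / (K * n)             # gradient of the mean CE w.r.t. the logits
    W_grad = G @ H.T + lam * W
    H_grad = W.T @ G + lam * H
    W, H = W - lr * W_grad, H - lr * H_grad

print(v0, within_class_var(H))        # within-class variance shrinks: collapse
```

Replacing the balanced `np.repeat` labels with class counts of different sizes is the imbalanced regime the paper analyzes, where the class means no longer form an equiangular tight frame.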
- …